Abstract:
Previous work on computer architecture design for robotics is reviewed. Based on an analysis of the performance data, two crucial issues are addressed: how specific an architecture should be, and which architectural styles should be chosen for particular applications.
Abstract:
State-of-the-art systems for semantic image segmentation use feed-forward pipelines with fixed computational costs. Building an image segmentation system that works across a range of computational budgets is challenging and time-intensive as new architectures must be designed and trained for every computational setting. To address this problem we develop a recurrent neural network that successively improves prediction quality with each iteration. Importantly, the RNN may be deployed across a range of computational budgets by merely running the model for a variable number of iterations. We find that this architecture is uniquely suited for efficiently segmenting videos. By exploiting the segmentation of past frames, the RNN can perform video segmentation at similar quality but reduced computational cost compared to state-of-the-art image segmentation methods. When applied to static images in the PASCAL VOC 2012 and Cityscapes segmentation datasets, the RNN traces out a speed-accuracy curve that saturates near the performance of state-of-the-art segmentation methods.
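The core idea of the abstract above, that a single recurrent model can trade accuracy for compute by running a shared refinement step for a variable number of iterations, can be illustrated with a toy sketch. This is not the paper's network; the blending rule and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": per-pixel evidence scores for 3 classes (H x W x C).
scores = rng.normal(size=(4, 4, 3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine(scores, prev_probs, alpha=0.5):
    """One recurrent step: combine fresh per-pixel evidence with the
    previous prediction, mimicking shared weights applied every iteration."""
    return softmax(scores + alpha * np.log(prev_probs + 1e-8))

# Start from a uniform segmentation; the iteration count is the budget.
probs = np.full(scores.shape, 1.0 / scores.shape[-1])
for _ in range(3):                      # more iterations -> sharper estimate
    probs = refine(scores, probs)

segmentation = probs.argmax(axis=-1)    # final per-pixel labels
```

For video, the loop would be initialized from the previous frame's `probs` instead of the uniform prior, so fewer iterations are needed per frame.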
Abstract:
Most autonomic cloud computing architectures are either domain-specific or focus only on certain properties of autonomic computing. In addition, they do not address the core design and architectural concerns of autonomic cloud computing, in which the cloud can manage itself. In this paper, we propose a generic software architectural style for autonomic cloud computing systems based on a simplified layered approach. The proposed style consists of five layers: the bottom layer holds the cloud hardware/software resources; the second layer is a virtual machine layer that gives service providers the flexibility to utilize cloud resources; the third layer is an autonomic manager that manages cloud services; the fourth layer is the cloud service provider, which delivers services to cloud clients; and the fifth, top layer is the client layer, which enables users to utilize the provided cloud services. This architectural style is a flexible and extensible software architecture for autonomic cloud computing systems, in which service providers can integrate their services within the architecture of the cloud computing software system. It also enables software architects to design and model their cloud computing systems flexibly, maximizing the reuse of existing cloud software components.
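The strict layering described above, where each layer only talks to the one directly beneath it, can be sketched as a chain of delegating components. All class and method names here are hypothetical; the sketch only shows the layering discipline, not the paper's design.

```python
# Hypothetical sketch of the five-layer style; names are illustrative.

class CloudResources:                  # layer 1: hardware/software resources
    def allocate(self, amount):
        return f"allocated {amount} units"

class VirtualMachine:                  # layer 2: flexible access to resources
    def __init__(self, resources):
        self.resources = resources
    def provision(self, amount):
        return self.resources.allocate(amount)

class AutonomicManager:                # layer 3: self-managing cloud services
    def __init__(self, vm):
        self.vm = vm
    def manage(self, amount):
        # a real manager would monitor, analyze, plan, and execute here
        return self.vm.provision(amount)

class ServiceProvider:                 # layer 4: exposes services to clients
    def __init__(self, manager):
        self.manager = manager
    def serve(self, request_size):
        return self.manager.manage(request_size)

class Client:                          # layer 5: consumes the services
    def __init__(self, provider):
        self.provider = provider
    def request(self, size):
        return self.provider.serve(size)

client = Client(ServiceProvider(AutonomicManager(VirtualMachine(CloudResources()))))
print(client.request(4))   # the request flows down through all five layers
```

Because each layer depends only on the layer below, any layer (e.g. the autonomic manager) can be swapped out without touching the others, which is what makes the style extensible.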
Abstract:
The single communication channel between memory and CPU in the von Neumann architecture is quickly stalling the growth of computer processors. A probable solution to this problem is to fuse the processing and memory elements. A simple low-latency single-chip memory and processor cannot solve the problem, as the fundamental channel bottleneck remains due to the logical splitting of processor and memory. This paper shows that a paradigm shift is possible by combining arithmetic logic unit and random access memory (ARAM) elements at the bit level. This modest bit-level ARAM is used to perform word-level ALU instructions with minor modifications, making the ARAM cells capable of executing instructions in parallel. It is also asynchronous and hence reduces power consumption significantly. A CMOS implementation is presented that verifies the practicality of the proposed ARAM.
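The bit-level compute-in-memory idea can be modeled in software: each cell stores one bit and carries a one-bit ALU, and a word-level operation is composed across cells without ever shipping the word to a separate processor. This is a purely illustrative toy (a ripple-carry ADD), not the paper's circuit.

```python
# Toy model of the ARAM idea: every memory cell stores one bit and can
# perform a one-bit ALU operation in place, so a word-level ADD happens
# inside the memory array itself.

class BitCell:
    def __init__(self, bit=0):
        self.bit = bit                       # the stored memory bit
    def add(self, other_bit, carry_in):
        """Full adder inside the cell: updates the stored bit in place."""
        total = self.bit + other_bit + carry_in
        self.bit = total & 1
        return total >> 1                    # carry out to the next cell

def store(word, value):
    for i, cell in enumerate(word):
        cell.bit = (value >> i) & 1

def load(word):
    return sum(cell.bit << i for i, cell in enumerate(word))

word_a = [BitCell() for _ in range(8)]       # one 8-bit ARAM word
store(word_a, 42)

operand, carry = 23, 0
for i, cell in enumerate(word_a):            # word-level ADD, bit by bit
    carry = cell.add((operand >> i) & 1, carry)

print(load(word_a))                          # 42 + 23 computed in place
```

In hardware the per-cell adders would operate asynchronously and many words could execute in parallel; the serial ripple here is only for clarity.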
Abstract:
Since the proliferation of fog computing, various distributed architectures have been proposed to extend the cloud to the edge of the network. However, no study so far compares different fog computing architectures and produces quantitative results to examine the efficiency of each architecture for different use cases. Such a study could provide guidelines for selecting an appropriate distributed architecture for fog computing while taking into account the requirements of the final applications. To bridge this gap in the literature, we create a unified system model that can represent the basic architectures commonly used for fog computing, i.e., hierarchical and flat. Furthermore, we design algorithms for creating fog computing systems that follow these architectures, and we perform various experiments focusing on communication latency and bandwidth utilization. Notably, our results show that for applications that do not depend on the cloud, i.e., where no resource-demanding tasks are involved, the hierarchical architecture reduces communication latency by 13% compared to the flat one. However, for applications that also include resource-demanding tasks, the flat architecture reduces communication latency by 16% compared to the hierarchical one.
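The direction of the trade-off reported above can be illustrated with a back-of-the-envelope latency model: hierarchical fog resolves local requests at a nearby aggregation tier but must climb the tree to reach the cloud, while flat fog searches laterally among peers but reaches the cloud directly. All link latencies below are invented for illustration; only the qualitative comparison, not the 13%/16% figures, is meant to carry over.

```python
# Toy latency model (milliseconds); all numbers are assumptions.
EDGE_TO_FOG = 5       # device -> nearest fog node
FOG_TO_FOG = 8        # one lateral hop between peer fog nodes (flat)
FOG_TO_TIER2 = 10     # fog node -> aggregation tier (hierarchical)
TIER2_TO_CLOUD = 50   # aggregation tier -> cloud
FOG_TO_CLOUD = 55     # direct fog-node-to-cloud link (flat)

def hierarchical_latency(needs_cloud):
    if needs_cloud:
        # request must climb the tree before reaching the cloud
        return EDGE_TO_FOG + FOG_TO_TIER2 + TIER2_TO_CLOUD
    # resolved at the aggregation tier without lateral searching
    return EDGE_TO_FOG + FOG_TO_TIER2

def flat_latency(needs_cloud, lateral_hops=3):
    if needs_cloud:
        # any fog node can forward straight to the cloud
        return EDGE_TO_FOG + FOG_TO_CLOUD
    # request is forwarded among peers until a node can serve it
    return EDGE_TO_FOG + lateral_hops * FOG_TO_FOG

print(hierarchical_latency(False), flat_latency(False))  # 15 29
print(hierarchical_latency(True), flat_latency(True))    # 65 60
```

With these assumed costs, hierarchical wins when requests stay in the fog and flat wins when the cloud is involved, matching the direction of the experimental results.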
Abstract:
This paper describes a project undertaken to explore reconfigurable computing as a means to achieve high-throughput, low-power on-board computing for spacecraft. The solution consists of a reconfigurable data processor chip, a reconfigurable memory module, reconfigurable interconnect, and dynamic power management. The reconfigurable processor chip was fabricated in a 0.25 μm bulk CMOS process using a radiation-hard-by-design standard cell library. Two challenge algorithms were demonstrated in hardware, and a dozen others in software simulation. The architecture was shown to achieve up to 3 giga-operations per second per watt. It is well suited to future generations of ultra-low-power, low-voltage processors and memories, as its extensibility offsets the loss in throughput due to low-voltage operation.
Abstract:
Over the past decade, the software industry has steadily moved from large monolithic code repositories to small modular libraries. Microservices are the latest evolution in this transformation: software systems specially designed to do one thing and only one thing, but do it well. Internet of Things systems are usually designed to operate in the field without manual supervision, so they need sufficient software functionality to provide fault tolerance, error-state recovery, and operational consistency. This project is designed to deliver these core Internet of Things functionalities through an ecosystem of microservices. Specifically, the project focuses on an in-field gateway device, in this case a Raspberry Pi, and implements microservices for environmental monitoring, command and control, device registration and onboarding, and over-the-air software upgrades. These services empower the gateway device to successfully integrate itself into a cloud-connected content platform. For the purposes of this project, all cloud components are orchestrated on Amazon Web Services, the gateway device is a Raspberry Pi Model B, and the microservices are built in Java.
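The single-responsibility principle behind the services listed above can be sketched with one of them, environmental monitoring, reduced to its essentials: read a sensor, package the reading, hand it off. The project itself uses Java; this Python sketch is only illustrative, and the sensor read, device ID, and payload fields are all hypothetical.

```python
import json
import time

def read_sensor():
    """Stand-in for a real Raspberry Pi sensor driver (hypothetical values)."""
    return {"temperature_c": 21.5, "humidity_pct": 48.0}

def monitoring_payload(device_id):
    """The one thing this microservice does: package a sensor reading
    as JSON for the cloud platform to ingest."""
    return json.dumps({
        "device": device_id,
        "timestamp": int(time.time()),
        "reading": read_sensor(),
    })

payload = monitoring_payload("gateway-pi-01")
print(payload)
```

Command and control, onboarding, and over-the-air upgrades would each be separate processes with the same narrow shape, which is what lets them fail, restart, and upgrade independently on the gateway.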
Abstract:
The architectural design studio is without doubt the principal pillar of architectural education, and it plays one of the most important roles in developing that process. Despite the rising use of computing in the architectural profession, the main focus, at least in developing countries, remains on presentation as compared to other realms of practice. The current research presents an experimental model for using a computer application as a design tool across the architectural design stages, from creating and developing a concept to the final stages of project design, including the project's environmental studies. The research methodology can be summarized as a theoretical preface that explains the stages of the architectural design process, with reference to computer applications in each stage, followed by the experimental model, a definition of the software used, and the applied mechanisms, together with the display of the application model's output, which is then analyzed and evaluated.